Patent abstract:
The invention relates to a method for detecting the emotional state of a person by a robot, the robot comprising a situation manager, which is subdivided into a situation network for determining needs and an action network for determining measures to meet those needs, a planner for prioritizing measures proposed by the situation manager and optionally by an input device, and a sensor for detecting an event. Both the situation network and the action network are based on probabilistic models. The subdivision of the situation manager into a situation network and an action network means that the appropriate measure for a specific situation is not calculated directly from the actual data, but rather from the needs calculated for that situation. Furthermore, the invention relates to a robot for carrying out the method.
Publication number: CH713933A2
Application number: CH00775/18
Filing date: 2017-10-07
Publication date: 2018-12-28
Inventors: Hans Rudolf Früh Dr.; Keusch Dominik; Trüssel David; Meyer Raphael
Applicant: Zhongrui Funing Robotics Shenyang Co Ltd
IPC main classification:
Patent description:

Description
Background of the Invention Human tasks in personal care are increasingly being taken over by autonomous care robots that help meet the needs of daily life in the hospital or in home care. This applies in particular to the care of people with mental or cognitive impairments or illnesses, e.g. dementia. Nursing robots are equipped with devices for collecting information about the person in need of care and the service environment, i.e. sensors, a microphone, a camera or intelligent devices connected to the Internet of Things, and with means for carrying out measures, i.e. devices for gripping, moving and communicating. Human-robot interaction is achieved through intelligent functions such as speech recognition or the recognition of facial expressions or tactile patterns. These functions can also be imitated by the robot in the care situation, e.g. through speech or gesture generation or the generation of emotional feedback.
For robot-assisted care, it is a challenge to determine the actual needs of the person in need of care and of the service environment and to take the appropriate measures. The needs of the person include hunger, thirst, the desire for rest, emotional attention or social interaction. The needs of the service environment include, for example, the need to clear the table, tidy up the kitchen or replenish the refrigerator. The corresponding measures are those that meet these needs. In general, the needs and measures cannot be determined from the actual situation alone; they also depend on the history of the needs.
SUMMARY OF THE INVENTION The invention relates to a method for detecting the emotional state of a person by a robot, the robot comprising a situation manager, which is divided into a situation network for determining needs and an action network for determining measures to meet those needs, a planner for prioritizing measures proposed by the situation manager and optionally by an input device, and a sensor for detecting an event. Both the situation network and the action network are based on probability models. The division of the situation manager into a situation network and an action network means that the appropriate measure for a specific situation is not calculated directly from the actual data, but from the needs calculated for that situation.
Needs of the person in need of care include hunger, thirst, the desire for rest or the desire for emotional attention. The needs of the service environment include clearing the table, tidying up the kitchen or refilling the refrigerator.
Measures to meet the needs are, for example, bringing an object to the person, taking it away from the person, giving emotional feedback through speech generation or emotional image presentation, clearing the table or tidying up the kitchen.
The situation manager according to the present invention is divided into a situation network and an action network. The situation network is designed as an artificial neural network for making decisions about the situation-related needs, i.e. the needs in a specific situation. The situation-related needs represent the cumulative needs of the person in need of care and the service environment measured over time, i.e. the situation-related needs are based on the history of the needs.
The action network is an artificial neural network that derives the appropriate measures for the situation-related needs. Both the situation network and the action network are based on a probability model.
The subdivision of the situation manager into a situation network and an action network means that the calculation of the suitable measures for a specific situation is not based directly on the actual data, but rather on the calculation of the needs of the specific situation.
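To illustrate this two-stage structure, the following minimal Python sketch (purely illustrative and not the patented implementation; the class names, feature dimensions and the lists of needs and measures are assumptions made for the example) models the situation network and the action network as two small, untrained probabilistic classifiers: the first maps current features together with the history of needs to a probability distribution over needs, the second maps that distribution to a probability distribution over measures.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

class SituationNetwork:
    """Toy stand-in for the situation network: maps current features plus the
    history of needs to a probability distribution over needs."""
    def __init__(self, n_features, n_needs, rng):
        self.w = rng.normal(size=(n_needs, n_features + n_needs))

    def needs(self, features, need_history):
        x = np.concatenate([features, need_history])
        return softmax(self.w @ x)

class ActionNetwork:
    """Toy stand-in for the action network: maps the distribution over needs
    to a probability distribution over measures."""
    def __init__(self, n_needs, n_measures, rng):
        self.w = rng.normal(size=(n_measures, n_needs))

    def measures(self, needs):
        return softmax(self.w @ needs)

rng = np.random.default_rng(0)
NEEDS = ["hunger", "thirst", "rest", "emotional attention"]
MEASURES = ["bring food", "bring drink", "reduce stimulation", "give emotional feedback"]

situation_net = SituationNetwork(n_features=3, n_needs=len(NEEDS), rng=rng)
action_net = ActionNetwork(n_needs=len(NEEDS), n_measures=len(MEASURES), rng=rng)

features = np.array([0.2, 0.9, 0.1])           # classified sensor features (assumed)
need_history = np.array([0.1, 0.6, 0.2, 0.1])  # cumulative needs measured over time
p_needs = situation_net.needs(features, need_history)
p_measures = action_net.measures(p_needs)
print("needs:", dict(zip(NEEDS, p_needs.round(2))))
print("proposed measure:", MEASURES[int(p_measures.argmax())])
```

The point of the split is visible in the data flow of this sketch: the action network never sees the raw sensor data, only the computed needs.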
The situation manager receives input from an information pool. The information pool includes signals from sensors and Internet of Things (IoT) devices, a user database and a history. Sensors according to the present invention are, for example, a microphone, e.g. for capturing speech patterns, a camera, e.g. for capturing facial expression patterns, or a touchpad with tactile sensors, e.g. for capturing tactile patterns of the person. The signals detected by the sensors can be analyzed by speech recognition, facial expression recognition or recognition of tactile patterns.
An IoT device is, for example, a refrigerator with sensors for checking the expiration date of its contents. The user database is a collection of information about the people in need of care, such as their names, the current emotional state or the position in the room. The history contains the historical data of the sensors and IoT channels, but also personal data, for example the history of the emotional state and the history of the actions of the robot. The information pool also has access to the communication channels of the Open Platform, for example to receive information about the battery status of the robot.
Before information from the information pool can be used by the situation manager, it must pass through the feature preparation. The feature preparation refers to the classification of the analyzed patterns, for example by comparing the patterns with personalized patterns in the user database in order to derive the emotional state of the person, or to detect temporal developments in the signals from IoT devices.
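As a concrete illustration of this classification step, the sketch below (hypothetical; the person, the emotional states, the feature vectors and the use of cosine similarity are invented for the example) compares an analyzed feature vector against personalized reference patterns stored per person and returns the closest emotional state.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical user database: per-person reference feature vectors for known
# emotional states, e.g. derived earlier from speech, facial or tactile analysis.
USER_DB = {
    "anna": {
        "calm":     np.array([0.9, 0.1, 0.2]),
        "agitated": np.array([0.2, 0.9, 0.7]),
    }
}

def classify_pattern(person, features):
    """Feature preparation step: compare an analyzed feature vector with the
    personalized patterns stored for this person and return the best match."""
    templates = USER_DB[person]
    return max(templates, key=lambda state: cosine(features, templates[state]))

observed = np.array([0.3, 0.8, 0.6])      # features extracted from the analyzed signal
print(classify_pattern("anna", observed)) # -> "agitated"
```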
When prioritizing measures, the planner takes into account decisions of the situation manager and / or data from input devices such as a user input device, an appointment planner or an emergency control. An input device is a device for ordering a measure directly by the user, for example a button for ordering a specific care action. The appointment planner is a schedule of measures that have to be carried out regularly, for example serving food or bringing medication. The emergency control is able to recognize undesired or negative events, e.g. signs of rejection of or resistance to the nursing robot, or a low battery status. The emergency control has access to the information pool.
The prioritization by the planner has the effect, for example, of continuing the current measure, i.e. continuing to assign it the highest priority; suspending the current measure, i.e. assigning it a lower priority; canceling the current measure, i.e. deleting it from the list of measures; starting a new measure; or resuming a previously interrupted measure.
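One possible reading of this prioritization logic is sketched below (an illustrative assumption, not the claimed implementation; the urgency constants and measure names are invented): measures are kept in a priority queue ordered by urgency, and suspending the current measure simply means re-queuing it with a lower priority.

```python
import heapq
from dataclasses import dataclass, field
from itertools import count

# Hypothetical urgency levels, most urgent first, mirroring the ordering used
# later in this description: emergency > ordered via input device > scheduled
# > suggested by the situation manager.
EMERGENCY, ORDERED, SCHEDULED, SUGGESTED = range(4)

@dataclass(order=True)
class Measure:
    priority: int
    seq: int                       # tie-breaker so equal priorities stay FIFO
    name: str = field(compare=False)

class Planner:
    """Sketch of the planner: a min-heap of measures; the measure with the
    smallest priority value is executed next."""
    def __init__(self):
        self._queue = []
        self._counter = count()

    def add(self, name, priority):
        heapq.heappush(self._queue, Measure(priority, next(self._counter), name))

    def suspend(self, measure, new_priority):
        # Suspending means re-queuing the measure with a lower priority.
        self.add(measure.name, new_priority)

    def next_measure(self):
        return heapq.heappop(self._queue) if self._queue else None

planner = Planner()
planner.add("give emotional feedback", SUGGESTED)
planner.add("serve medication", SCHEDULED)
planner.add("recharge battery", EMERGENCY)
print(planner.next_measure().name)   # -> recharge battery
```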
[0014] The method for controlling the activities of a robot according to the present invention comprises the following steps:
Step 1: Detect a signal using a sensor. This step captures a signal or pattern that relates to the patient or service environment. The signals or signal patterns relate, for example, to a position signal, a speech pattern, an image pattern, a tactile pattern. If the signal pattern relates to a tactile pattern, the sensor is a tactile sensor, which is located, for example, in a touchpad of the robot. If an emotional state pattern is recognized with the aid of the sensor, the sensor is a microphone for recording a speech pattern and / or a camera for recording a facial expression pattern.
Step 2: Analyze the signal. Through this step, the detected signal or pattern is interpreted or evaluated in an aggregated manner, for example to extract features using time series. If the signal pattern is a tactile pattern, this step interprets the detected tactile pattern, for example in order to extract features using time series. If an emotional state pattern has been detected, the detected emotional state pattern is interpreted, for example in order to extract features using time series.
Step 3: Classify the signal. This step classifies the analyzed features, for example by comparing the patterns with personalized patterns in the user database, in order to derive the emotional state of the person or to detect temporal developments of signals from IoT devices. If the signal pattern is a tactile pattern, the tactile pattern is classified using personalized tactile patterns, i.e. the extracted features are compared with the personalized tactile patterns in the user database. If an emotional state pattern has been detected, the emotional state pattern is classified using personalized emotional state patterns, i.e. the extracted features are compared with the personalized emotional state patterns in the user database.
Step 4: Determine the needs of the person and the service environment with the help of the situation network. This step calculates the needs of the situation based on information from the information pool. The situation network is designed as an artificial neural network based on a probability model. The situation-related needs represent the cumulative needs of the person in need of care and the service environment measured over time. The calculation of the situation-related needs by the artificial neural network is therefore not only based on the actual needs, but also on the history of the needs.
Step 5: Determination of the measures to meet the needs determined by the situation network. This step calculates the appropriate measures for the needs of the situation. The action network is designed as an artificial neural network based on a probability model.
Step 6: Determination of measures that are triggered by an input device. This step defines the measures that are triggered by an input device. An input device is, for example, a button for ordering a specific care action, a scheduler for triggering measures that have to be carried out regularly, or an emergency control.
Step 7: Prioritization of the measures by the planner. This step prioritizes measures according to an urgency level, for example from highest to lowest priority: (1) emergency measures, (2) measures ordered by the input device, (3) scheduled measures, (4) measures suggested by the situation manager.
Step 8: Execution of the measure with the highest priority. In this step, the most urgent measure is carried out.
Step 9: Repeat steps (1) to (9) until a stop condition is reached. This step causes the robot to continue operating until it is stopped by an external stop command.
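Taken together, steps 1 to 9 form a sense-decide-act loop. The following sketch strings the steps together with hypothetical stage functions (all names and the stub implementations are assumptions made only for illustration; each callable stands for one of the stages described above).

```python
from itertools import count

def run_care_loop(sense, analyse, classify, determine_needs, determine_measures,
                  input_device_measures, prioritise, execute, stop_requested):
    """Minimal sketch of the nine-step workflow; each callable is one stage."""
    while not stop_requested():
        signal   = sense()                                  # step 1: detect a signal
        features = analyse(signal)                          # step 2: analyze it
        pattern  = classify(features)                       # step 3: classify it
        needs    = determine_needs(pattern)                 # step 4: situation network
        measures = determine_measures(needs)                # step 5: action network
        measures += input_device_measures()                 # step 6: input device
        ordered  = prioritise(measures)                     # step 7: planner
        if ordered:
            execute(ordered[0])                             # step 8: highest priority
        # step 9: repeat until the stop condition is reached

# Tiny demonstration with stub stages (two iterations, then stop).
ticks = count()
run_care_loop(
    sense=lambda: "tactile signal",
    analyse=lambda s: {"pressure": 0.8},
    classify=lambda f: "seeking attention",
    determine_needs=lambda p: ["emotional attention"],
    determine_measures=lambda n: [("give emotional feedback", 3)],
    input_device_measures=lambda: [("serve medication", 2)],
    prioritise=lambda m: sorted(m, key=lambda x: x[1]),   # smaller value = more urgent
    execute=lambda m: print("executing:", m[0]),
    stop_requested=lambda: next(ticks) >= 2,
)
```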
[0024] According to one embodiment of the invention, the input device is a user input device and / or an organizer and / or an emergency control.
According to a preferred embodiment of the invention, the situation network and / or the action network is based on a probability model.
According to an important embodiment of the invention, the situation manager receives information from an information pool, the information pool drawing on a sensor and / or the Internet of Things and / or a user database and / or a history and / or communication channels of the Open Platform.
[0027] According to a further embodiment of the invention, the information that the situation manager receives from the information pool is classified by a feature preparation task.
The invention also relates to a robot for carrying out the method described, the robot comprising a planner for prioritizing tasks that it receives from a situation manager and optionally from an input device. The situation manager is divided into a situation network to determine needs and an action network to determine measures to meet needs.
[0029] According to one embodiment, the input device is a user input device and / or an organizer and / or an emergency control.
According to a preferred embodiment, the situation network and / or the action network are based on a probability model.
According to an important embodiment, the situation manager receives information from an information pool, the information pool drawing on a sensor and / or the Internet of Things and / or a user database and / or a history and / or communication channels of the Open Platform.
[0032] According to a further embodiment, the information that the situation manager receives from the information pool is classified by a feature preparation task.
According to a very important embodiment, the sensor has an area of at least 16 mm². As a result, for example, the tactile pattern can be detected well by the sensor.
[0034] Finally, the sensor can be embedded in a soft, tactile envelope of the robot. This also allows the tactile pattern, for example, to be detected well by the sensor.
Brief Description of the Drawings
Fig. 1 is a diagram illustrating the information flow and the decision flow of the robot according to the present invention.
Fig. 2a is a flowchart showing the operation of the robot in the monitoring mode.
Fig. 2b is a flowchart showing the work flow of the robot in the tactile interaction mode.
Fig. 2c is a flowchart showing the work flow of the robot in the social interaction mode.
Fig. 1 shows the information flow and the decision flow of the nursing robot. The core component of the nursing robot is the planner. The job of the planner is to prioritize actions and to trigger the execution of measures in a specific care situation.
Measures include changing the position, bringing or taking away an object or tidying up the kitchen. When prioritizing measures, the planner takes into account decisions of the situation manager and / or of input devices such as a user input device, an appointment planner or an emergency control.
The task of the situation manager is to provide the planner with the measures that meet the needs of the person, for example hunger, thirst or stress reduction, and the needs of the service environment in a specific situation. The situation manager responds to requests from the planner. The situation manager according to the present invention is divided into a situation network and an action network. The situation network is designed as an artificial neural network for making decisions about the situation-related needs, i.e. the needs in a specific situation. The situation-related needs represent the cumulative needs of the person in need of care and of the service environment measured over time, i.e. the situation-related needs are based on the history of the needs.
The action network is an artificial neural network that derives the appropriate measures for the situation-related needs. Both the situation network and the action network are based on a probability model.
The subdivision of the situation manager into a situation network and an action network means that the calculation of the suitable measures for a specific situation is not based directly on the data of the information pool, rather it is based on the separate calculation of the needs for a specific situation.
The situation manager receives input from an information pool. The information pool includes information from sensors and IoT devices, a user database and a history. Sensors according to the present invention are, for example, a microphone, a camera or a touchpad. An IoT device is, for example, a refrigerator or another intelligent device. The user database is a collection of information about the people in need of care, for example their names, the current emotional state or the current position in the room. The history contains the historical data of the sensors and IoT channels as well as the history of the conditions of the persons in need of care and the history of the actions of the robot. The information pool also has access to the communication channels of the Open Platform, for example to receive information about the battery status of the robot.
Before information from the information pool can be used by the situation manager, it must pass through the feature preparation. The feature preparation relates to the classification or accumulation of information, for example the classification of voice signals via speech recognition, the classification of touches via tactile recognition, the classification of emotional states via facial expression recognition, or the accumulation of information from intelligent devices for the detection of developments.
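For the accumulation part of the feature preparation, a minimal sketch of how a temporal development can be derived from a short history of readings might look as follows (the IoT channel, readings and threshold are invented assumptions, not values from the patent):

```python
# Hypothetical accumulation step: a short history of refrigerator stock
# readings (an IoT channel) is reduced to a trend, which in turn signals a
# need of the service environment.
def detect_decreasing_trend(readings, threshold=-0.5):
    """Return True when the average step-to-step change of the readings falls
    below the threshold, i.e. the stock is clearly decreasing."""
    if len(readings) < 2:
        return False
    deltas = [b - a for a, b in zip(readings, readings[1:])]
    return sum(deltas) / len(deltas) < threshold

fridge_stock = [9, 8, 6, 5, 3]          # items left, oldest reading first
if detect_decreasing_trend(fridge_stock):
    print("derived need: refill the refrigerator")
```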
[0043] An input device can be, for example, a button with an assigned function or a touchscreen. The scheduler is a schedule of measures that must be carried out regularly, e.g. serving food or providing medication. The emergency control is able to recognize undesired or negative events, e.g. signs of rejection of or resistance to the nursing robot, or a low battery. The emergency control has access to the information pool.
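A correspondingly simple sketch of the emergency control is given below (the field names, threshold and measures are hypothetical; the real emergency control reads the information pool described above). It checks for undesired events and emits measures with the highest urgency:

```python
# Hypothetical emergency-control check: raise emergency measures when an
# undesired event is detected, e.g. a low battery or classified signs of
# rejection towards the robot.
def emergency_measures(info_pool, battery_threshold=0.15):
    measures = []
    if info_pool.get("battery_level", 1.0) < battery_threshold:
        measures.append(("recharge battery", 0))                  # 0 = highest urgency
    if info_pool.get("emotional_state") == "rejecting":
        measures.append(("pause interaction and notify caregiver", 0))
    return measures

info_pool = {"battery_level": 0.10, "emotional_state": "calm"}
print(emergency_measures(info_pool))   # -> [('recharge battery', 0)]
```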
The prioritization by the planner has the effect, for example, of continuing the current measure, i.e. continuing to assign it the highest priority; suspending the current measure, i.e. assigning it a lower priority; canceling the current measure, i.e. deleting it from the list of measures; starting a new measure; or resuming a previously interrupted measure.
Fig. 2a shows a flowchart showing the work flow of the robot in the monitoring mode. The process includes the following steps:
Step 1: Detecting a signal using a sensor. This step captures a signal or pattern that relates to the patient or service environment. The signals or signal patterns relate, for example, to a position signal, a speech pattern, an image pattern, a tactile pattern.
Step 2: Analyze the signal. Through this step, the detected signal or pattern is interpreted or evaluated in an aggregated manner, for example to extract features using time series.
Step 3: Classify the signal. This step classifies the analyzed features, for example by comparing the patterns with personalized patterns in the user database, in order to derive the emotional state of the person or to detect temporal developments of signals from IoT devices.
Step 4: Determine the needs of the person and the service environment with the help of the situation network. This step calculates the needs of the situation based on information from the information pool. The situation network is designed as an artificial neural network based on a probability model. The situation-related needs represent the cumulative needs of the person in need of care and the service environment measured over time. The calculation of the situation-related needs by the artificial neural network is therefore not only based on the actual needs, but also on the history of the needs.
Step 5: Determination of the measures to meet the needs determined by the situation network. This step calculates the appropriate measures for the needs of the situation. The action network is designed as an artificial neural network based on a probability model.
Step 6: Determination of measures that are triggered by an input device. This step defines the measures that are triggered by an input device. An input device is, for example, a button for ordering a specific care action, a scheduler for triggering measures that have to be carried out regularly, or an emergency control.
Step 7: Prioritization of the measures by the planner. This step prioritizes measures according to an urgency level, e.g. from highest to lowest priority: (1) emergency measures, (2) measures ordered by the input device, (3) scheduled measures, (4) measures suggested by the situation manager.
Step 8: Execution of the measure with the highest priority. In this step, the most urgent measure is carried out.
Step 9: Repeat steps (1) to (9) until a stop condition is reached. This step causes the robot to continue operating until it is stopped by an external stop command.
Fig. 2b shows a flowchart showing the workflow of the robot in the tactile interaction mode. The process includes the following steps:
Step 1: Detecting a tactile pattern by a sensor. This step detects a tactile pattern related to the patient.
Step 2: Analyze the tactile signal by an analysis unit. Through this step, the detected tactile pattern is interpreted or evaluated in an aggregated manner, for example to extract features using time series.
Step 3: Classify the tactile signal using personalized tactile patterns. This step classifies the analyzed features, for example by comparing the patterns with the personalized tactile patterns in the user database, in order to derive the emotional state of the person or to detect temporal developments of signals from IoT devices.
Step 4: Determine the needs of the person using the situation network. This step calculates the needs of the situation based on information from the information pool. The situation network is designed as an artificial neural network based on a probability model. The situation-related needs represent the cumulative needs of the person in need of care and the service environment measured over time. The calculation of the situation-related needs by the artificial neural network is therefore not only based on the actual needs, but also on the history of the needs.
Step 5: Determination of the measures to meet the needs determined by the situation network. This step calculates the appropriate measures for the needs of the situation. The action network is designed as an artificial neural network based on a probability model.
Step 6: Determination of measures that are triggered by an input device. This step defines the measures that are triggered by an input device. An input device is, for example, a button for ordering a specific care action, a scheduler for triggering measures that have to be carried out regularly, or an emergency control.
Step 7: Prioritization of the measures by the planner. This step prioritizes measures according to an urgency level, e.g. from highest to lowest priority: (1) emergency measures, (2) measures ordered by the input device, (3) scheduled measures, (4) measures suggested by the situation manager.
Step 8: Execution of the measure of the highest priority. In this step the most urgent measure is carried out.
Step 9: Repeat steps (1) to (9) until a stop condition is reached. This step causes the robot to continue operating until it is stopped by an external stop command.
Fig. 2c is a flowchart showing the work flow of the robot in the social interaction mode. The process includes the following steps:
Step 1: Detection of an emotional state pattern by a sensor. This step captures an emotional state pattern related to the patient.
Step 2: Analyze the emotional state pattern by an analysis unit. Through this step, the detected emotional state pattern is interpreted or evaluated in an aggregated manner, for example in order to extract features using time series.
Step 3: Classify the emotional state pattern with the help of personalized emotional state patterns. This step classifies the analyzed features, for example by comparing the patterns with the personalized emotional state patterns in the user database, in order to derive the emotional state of the person or to detect temporal developments of signals from IoT devices.
Step 4: Determine the needs of the person using the situation network. This step calculates the needs of the situation based on information from the information pool. The situation network is designed as an artificial neural network based on a probability model. The situation-related needs represent the cumulative needs of the person in need of care and the service environment measured over time. The calculation of the situation-related needs by the artificial neural network is therefore not only based on the actual needs, but also on the history of the needs.
Step 5: Determination of the measures to meet the needs determined by the situation network. This step calculates the appropriate measures for the needs of the situation. The action network is designed as an artificial neural network based on a probability model.
Step 6: Determination of measures that are triggered by an input device. This step defines the measures that are triggered by an input device. An input device is, for example, a button for ordering a specific care action, a scheduler for triggering measures that have to be carried out regularly, or an emergency control.
Step 7: Prioritization of the measures by the planner. This step prioritizes measures according to an urgency level, e.g. from highest to lowest priority: (1) emergency measures, (2) measures ordered by the input device, (3) scheduled measures, (4) measures suggested by the situation manager.
Step 8: Execution of the highest priority measure. In this step the most urgent measure is carried out.
Step 9: Repeat steps (1) to (9) until a stop condition is reached. This step causes the robot to continue operating until it is stopped by an external stop command.
Claims:
Claims (12)
[1]
1. A method for detecting the emotional state of a person by a robot, the robot comprising:
- a situation manager, which is subdivided into a situation network for determining needs and an action network for determining measures to meet the needs,
- a planner for prioritizing tasks that are received from the situation manager and optionally from an input device,
- a sensor for detecting a tactile pattern,
the method comprising the following steps:
Step 1: Detect a tactile pattern by the sensor
Step 2: Analyze the tactile pattern with the help of an analysis unit
Step 3: Classify the tactile pattern using personalized tactile patterns that are stored in a user database
Step 4: Determine the needs using the situation network
Step 5: Determine the measures to meet the needs identified in step 4 with the help of the action network
Step 6: Determine measures that are triggered by the input device
Step 7: Prioritize the measures by the planner
Step 8: Execute the measure with the highest priority
Step 9: Repeat steps (1) to (9)
[2]
2. The method of claim 1, wherein the input device is a user input device and / or an organizer and / or an emergency control.
[3]
3. The method according to claim 1 or 2, wherein the situation network and / or the action network are based on a probability model.
[4]
4. The method according to any one of the preceding claims, wherein the situation manager receives information from an information pool, the information pool drawing on a sensor and / or the Internet of Things and / or a user database and / or a history and / or communication channels of the Open Platform.
[5]
5. The method of claim 4, wherein the information that the situation manager receives from the information pool is classified by a feature preparation task.
[6]
6. Robot for performing the method according to any one of claims 1 to 5, wherein the robot comprises a planner for prioritizing tasks that it receives from the situation manager and optionally from an input device, and a sensor for detecting a tactile pattern, characterized in that the situation manager is divided into a situation network to determine needs and an action network to determine measures to meet needs.
[7]
7. Robot according to claim 6, wherein the input device is a user input device and / or an organizer and / or an emergency control.
[8]
8. Robot according to claim 6 or 7, wherein the situation network and / or the action network are based on a probability model.
[9]
9. Robot according to any one of claims 6 to 8, wherein the situation manager receives information from an information pool, the information pool drawing on a sensor and / or the Internet of Things and / or a user database and / or a history and / or communication channels of the Open Platform.
[10]
10. The robot of claim 9, wherein the information that the situation manager receives from the information pool is classified by a feature preparation task.
[11]
11. Robot according to any one of claims 6 to 10, wherein the sensor has an area of at least 16 mm².
[12]
12. Robot according to any one of claims 6 to 11, wherein the sensor is embedded in a soft, tactile envelope of the robot.
Similar technologies:
Publication number | Publication date | Patent title
CA2655189C|2016-01-26|Measuring cognitive load
WO2009056525A1|2009-05-07|Method for determining a destination call for use in a lift system and a lift system
CH713933A2|2018-12-28|Method for detecting the emotional state of a person by a robot.
Martínez et al.2010|Emotion elicitation oriented to the development of a human emotion management system for people with intellectual disabilities
DE112018000702T5|2019-11-14|ROBOTIC SYSTEM AND ROBOTIAL DIALOGUE PROCESS
Pfeiffer et al.2015|Aircraft in your head: How air traffic controllers mentally organize air traffic
McDuff et al.2012|AffectAura: Emotional wellbeing reflection system
EP2091023B1|2012-06-20|Method for generating an information signal for access requests and device for carrying out the method
Koldijk et al.2012|Unobtrusively measuring stress and workload of knowledge workers
JP2019124012A|2019-07-25|Automatic door
Kheratkar et al.2020|Gesture Controlled Home Automation using CNN
EP2697702B1|2015-08-12|Device and method for the gesture control of a screen in a control room
DE102018009612A1|2020-06-18|Process for creating user-specific, automated knowledge and learning models and for communicating them with the user
Nick et al.2007|A hybrid approach to intelligent living assistance
Farooqi2016|Methods for the investigation of work and human errors in rail engineering contexts
DE102021110674A1|2021-10-28|MOTION EVALUATION SYSTEM, MOTION EVALUATION DEVICE AND MOTION EVALUATION METHOD
JP2021039709A|2021-03-11|Platform danger degree determination program and system
DE102019212095A1|2021-02-18|Method for operating an elevator installation and also an elevator installation, an elevator control and a computer program product for executing such a method
Hockey et al.2006|Implementing adaptive automation using on-line detection of high risk operator functional state
Bläsing et al.2021|Cognitive compatibility in modern manual mixed-model assembly systems
Pimenta et al.2014|A non-invasive approach to detect and monitor acute mental fatigue
Nordin et al.2019|Usability study on human machine interface: In Volvo Cars body factory
Osvalder et al.2017|Human-machine systems
CN113742687A|2021-12-03|Internet of things control method and system based on artificial intelligence
Bisognano2004|What juran says
Patent family:
Publication number | Publication date
TWM581742U|2019-08-01|
CH713934B1|2020-05-29|
CH713932B1|2020-05-29|
TWM577790U|2019-05-11|
JP3226609U|2020-07-09|
JP3228266U|2020-10-22|
WO2018233859A1|2018-12-27|
WO2018233856A1|2018-12-27|
JP3227655U|2020-09-10|
TWM577958U|2019-05-11|
TWM581743U|2019-08-01|
CH713933B1|2020-05-29|
WO2018233857A1|2018-12-27|
JP3227656U|2020-09-10|
EP3641992A1|2020-04-29|
CN209207531U|2019-08-06|
CN209304585U|2019-08-27|
WO2018233858A1|2018-12-27|
CH713932A2|2018-12-28|
CH713934A2|2018-12-28|
US20200139558A1|2020-05-07|
CN109129526A|2019-01-04|
Cited documents:
Publication number | Filing date | Publication date | Applicant | Patent title

DE2753946A1|1977-12-03|1979-06-13|Bayer Ag|1-N-ARYL-1,4-DIHYDROPYRIDINE AND THEIR USE AS A MEDICINAL PRODUCT|
JPS6039090A|1983-08-11|1985-02-28|Mitsubishi Electric Corp|Hand device for industrial robot|
JPH0377922B2|1984-09-14|1991-12-12|Tokyo Shibaura Electric Co|
JPH0319783A|1989-02-16|1991-01-28|Sanyo Electric Co Ltd|Workpiece holding mechanism|
JPH05381U|1991-06-17|1993-01-08|株式会社安川電機|Robot hand|
JPH06206187A|1992-06-10|1994-07-26|Hanshin Sharyo Kk|Nippingly holding of transferred article and device therefor|
JP3515299B2|1996-11-26|2004-04-05|西日本電線株式会社|Wire gripping tool|
EP0993916B1|1998-10-15|2004-02-25|Tecan Trading AG|Robot gripper|
AT303622T|2001-04-22|2005-09-15|Neuronics Ag|articulated robot|
JP2005131719A|2003-10-29|2005-05-26|Kawada Kogyo Kk|Walking type robot|
US8909370B2|2007-05-08|2014-12-09|Massachusetts Institute Of Technology|Interactive systems employing robotic companions|
JP2010284728A|2009-06-09|2010-12-24|Kawasaki Heavy Ind Ltd|Conveyance robot and automatic teaching method|
JP4834767B2|2009-12-10|2011-12-14|株式会社アイ.エス.テイ|Grasping device, fabric processing robot, and fabric processing system|
CH705297A1|2011-07-21|2013-01-31|Tecan Trading Ag|Gripping pliers with interchangeable gripper fingers.|
US20150314454A1|2013-03-15|2015-11-05|JIBO, Inc.|Apparatus and methods for providing a persistent companion device|
US9434076B2|2013-08-06|2016-09-06|Taiwan Semiconductor Manufacturing Co., Ltd.|Robot blade design|
JP6335587B2|2014-03-31|2018-05-30|株式会社荏原製作所|Substrate holding mechanism, substrate transfer device, semiconductor manufacturing device|
EP2933065A1|2014-04-17|2015-10-21|Aldebaran Robotics|Humanoid robot with an autonomous life capability|
EP2933064A1|2014-04-17|2015-10-21|Aldebaran Robotics|System, method and computer program product for handling humanoid robot interaction with human|
JP6593991B2|2014-12-25|2019-10-23|三菱重工業株式会社|Mobile robot and tip tool|
CN106325112B|2015-06-25|2020-03-24|联想有限公司|Information processing method and electronic equipment|CN109807903B|2019-04-10|2021-04-02|博众精工科技股份有限公司|Robot control method, device, equipment and medium|
Legal status:
2019-04-30| AZW| Rejection (application)|
2019-05-31| NV| New agent|Representative's name: INTELLECTUAL PROPERTY SERVICES GMBH, CH |
Priority:
Application number | Filing date | Patent title
US201762521686P| true| 2017-06-19|2017-06-19|
PCT/EP2017/075575|WO2018233857A1|2017-06-19|2017-10-06|Method for detecting the emotional state of a person by a robot|